    Simulating Vocal Imitation in Infants, using a Growth Articulatory Model and Speech Robotics

    In order to shed light on the cognitive representations likely to underlie early vocal imitation, we simulated Kuhl and Meltzoff's (1996) experiment using Bayesian robotics and a statistical model of the vocal tract fitted to pre-babblers' actual vocalizations. The simulations showed that audition is necessary to account for infants' early vocal imitation performance: simulating purely visual imitation failed to reproduce infants' scores and pattern of imitation. Furthermore, a small number of vocalizations (fewer than 100) proved sufficient for a learning process to reach scores at least as high as those of pre-babblers. Early vocal imitation thus lies within the reach of a baby robot, with only a few assumptions about learning and imitation.
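    A rough idea of the approach can be conveyed by a toy Bayesian inversion of an articulatory model: a forward model maps motor parameters to auditory features, and imitation amounts to selecting the motor command with the highest posterior probability given a heard target. The sketch below is illustrative only, not the authors' model; the forward model, motor parameters, and noise level are invented stand-ins.

    ```python
    # Minimal sketch (not the authors' code): Bayesian choice of a motor
    # command imitating an auditory target, assuming a known forward
    # articulatory model from motor parameters to formant frequencies.
    import numpy as np

    rng = np.random.default_rng(0)

    def forward_model(motor):
        """Stand-in articulatory model: motor params -> (F1, F2) in Hz.
        The real growth articulatory model is far richer; this is illustrative."""
        jaw, tongue = motor
        f1 = 300.0 + 500.0 * jaw          # jaw opening mostly drives F1
        f2 = 800.0 + 1500.0 * tongue      # tongue fronting mostly drives F2
        return np.array([f1, f2])

    # Learned motor repertoire: a small set of prior vocalizations (< 100),
    # echoing the learning result described in the abstract.
    repertoire = rng.uniform(0.0, 1.0, size=(80, 2))

    def imitate(target_formants, sigma=80.0):
        """Pick the motor command maximizing P(motor | audio), assuming a
        Gaussian auditory likelihood and a uniform prior over the repertoire."""
        log_post = np.array([
            -np.sum((forward_model(m) - target_formants) ** 2) / (2 * sigma ** 2)
            for m in repertoire
        ])
        return repertoire[np.argmax(log_post)]

    # Example: imitate an adult /a/-like target (high F1, mid F2).
    best_motor = imitate(np.array([750.0, 1300.0]))
    print("chosen motor command:", best_motor)
    print("produced formants:", forward_model(best_motor))
    ```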

    Multimedia signal processing for behavioral quantification in neuroscience

    While there have been great advances in quantifying the genotype of organisms, including full genomes for many species, quantification of the phenotype is at a comparatively primitive stage. Part of the reason is technical difficulty: the phenotype covers a wide range of characteristics, from static morphological features to dynamic behavior. The latter poses challenges in the area of multimedia signal processing. Automated analysis of video and audio recordings of animal and human behavior is a growing area of research, ranging from the behavioral phenotyping of genetically modified mice or Drosophila to the study of song learning in birds and speech acquisition in human infants. This paper reviews recent advances and identifies key problems for a range of behavioral experiments that use audio and video recording. This research area offers both research challenges and an application domain for advanced multimedia signal processing. A number of MMSP tools already exist that are directly relevant to behavioral quantification, such as speech recognition, video analysis, and, more recently, wired and wireless sensor networks for surveillance. The research challenge is to adapt these tools and to develop new ones needed to study human and animal behavior in a high-throughput manner while minimizing human intervention. In contrast with consumer applications, the research arena carries less of a penalty for computational complexity, so algorithmic quality can be maximized by exploiting the larger computational resources available to the biomedical researcher.
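    As a concrete, if simplistic, example of the kind of automated video measure such pipelines start from, the sketch below computes frame-differencing motion energy, a common first-pass indicator of activity in behavioral recordings. It is not taken from the paper; the threshold and the synthetic frames are assumptions chosen purely for illustration.

    ```python
    # Illustrative sketch only (not from the paper): frame-differencing
    # motion energy as a simple automated measure of behavioral activity.
    # Frames here are synthetic numpy arrays; in practice they would come
    # from a video decoder.
    import numpy as np

    def motion_energy(frames, threshold=15):
        """Per-frame count of pixels whose 8-bit grayscale intensity changed
        by more than `threshold` relative to the previous frame."""
        frames = frames.astype(np.int16)            # avoid uint8 wraparound
        diffs = np.abs(np.diff(frames, axis=0))     # |frame[t] - frame[t-1]|
        return (diffs > threshold).sum(axis=(1, 2))

    # Synthetic example: 100 frames of 64x64 noise with a burst of "activity".
    rng = np.random.default_rng(1)
    frames = rng.integers(0, 10, size=(100, 64, 64), dtype=np.uint8)
    frames[40:60] += rng.integers(0, 80, size=(20, 64, 64), dtype=np.uint8)

    energy = motion_energy(frames)
    active = np.where(energy > energy.mean() + 2 * energy.std())[0]
    print("frames flagged as active:", active)
    ```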

    Bayesian Action–Perception Computational Model: Interaction of Production and Recognition of Cursive Letters

    In this paper, we study the cooperation of the perception and action representations involved in cursive letter recognition and production. We propose a mathematical formulation for the whole perception–action loop, based on probabilistic modeling and Bayesian inference, which we call the Bayesian Action–Perception (BAP) model. Being a model of both perception and action processes, its purpose is to study the interaction of these processes. More precisely, the model includes a feedback loop from motor production, which implements an internal simulation of movement, so that motor knowledge can be involved during perception tasks. We formally define the BAP model and show how it solves the following six cognitive tasks using Bayesian inference: i) letter recognition (purely sensory), ii) writer recognition, iii) letter production (with different effectors), iv) copying of trajectories, v) copying of letters, and vi) letter recognition (with internal simulation of movements). We present computer simulations of each of these cognitive tasks, and discuss experimental predictions and theoretical developments.
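    The flavor of such a perception–action model can be illustrated with a toy discrete joint distribution P(L, M, S) = P(L) P(M | L) P(S | M), where L is the letter, M a discretized motor plan, and S the observed trace. The sketch below is a minimal assumption-laden illustration, not the BAP model itself; all distributions, variable sizes, and names are invented.

    ```python
    # Minimal sketch, assuming a toy discrete perception-action loop:
    # P(L, M, S) = P(L) P(M | L) P(S | M). Recognition queries P(L | S) by
    # summing out the motor variable; production picks a motor plan given L.
    import numpy as np

    letters = ["a", "b"]
    motors = ["m0", "m1", "m2"]

    P_L = np.array([0.5, 0.5])                       # prior over letters
    P_M_given_L = np.array([[0.7, 0.2, 0.1],         # P(M | L=a)
                            [0.1, 0.2, 0.7]])        # P(M | L=b)
    P_S_given_M = np.array([[0.8, 0.1, 0.1],         # P(S | M=m0)
                            [0.1, 0.8, 0.1],         # P(S | M=m1)
                            [0.1, 0.1, 0.8]])        # P(S | M=m2)

    def recognize(s_index):
        """Letter recognition: P(L | S), marginalizing over the motor plan."""
        joint_LM = P_L[:, None] * P_M_given_L            # P(L, M)
        joint_LS = joint_LM @ P_S_given_M[:, s_index]    # sum over M
        return joint_LS / joint_LS.sum()

    def produce(letter):
        """Letter production: most probable motor plan given the letter."""
        l = letters.index(letter)
        return motors[int(np.argmax(P_M_given_L[l]))]

    print("P(L | S=s2):", dict(zip(letters, recognize(2).round(3))))
    print("motor plan for 'a':", produce("a"))
    ```

    In the same spirit, the feedback loop from motor production described in the abstract would amount to conditioning recognition not only on the sensory trace but also on an internally simulated trajectory, reusing the production side of the joint distribution during perception.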